Job Description
Responsibilities
Design, build, and manage scalable and maintainable ETL/ELT pipelines using Apache Airflow and Python (a minimal sketch follows this list).
Integrate data from multiple sources (databases, APIs, flat files, cloud services, etc.) into data warehouses or data lakes.
Develop reusable Python modules and scripts to support data transformation and validation.
Collaborate with data engineers, analysts, and platform teams to understand integration requirements and deliver solutions.
Monitor, debug, and improve Airflow DAGs for performance, reliability, and fault tolerance.
Ensure data integrity and compliance with enterprise data governance policies.
Maintain documentation for integration workflows and operational procedures.
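To make the pipeline work concrete, here is a minimal sketch of the kind of DAG this role would build and monitor, assuming Airflow 2.x's TaskFlow API; the endpoint URL, field names, and retry settings are hypothetical illustrations, not details from this posting.

```python
from datetime import datetime, timedelta

import requests
from airflow.decorators import dag, task


@dag(
    dag_id="orders_pipeline",          # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
)
def orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull one batch of records from a hypothetical REST endpoint.
        resp = requests.get("https://api.example.com/orders", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Basic validation/normalization: keep completed orders only.
        return [
            {"order_id": r["id"], "total": float(r["total"])}
            for r in records
            if r.get("status") == "completed"
        ]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to the warehouse or lake via a hook;
        # printing keeps the sketch self-contained and runnable.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


orders_pipeline()
```

Keeping extract, transform, and load as separate tasks makes failures visible per stage in the Airflow UI, and the retries and retry_delay in default_args give each task basic fault tolerance.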
Required Skills
Strong proficiency in Python, with experience writing modular, testable, and reusable code (see the first sketch after this list).
Hands-on experience with Apache Airflow (writing custom DAGs and operators, managing schedules).
Experience with data integration, ETL/ELT processes, and working with structured/unstructured data.
Familiarity with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and writing efficient SQL queries (see the SQL sketch after this list).
Experience integrating with REST APIs, file transfer endpoints (SFTP/FTP), and cloud storage (S3, Azure Blob, etc.).
Understanding of CI/CD practices and version control (e.g., Git).
Strong problem-solving skills and attention to detail.
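As one reading of "modular, testable, and reusable" above, the sketch below pairs a small validation/normalization function with pytest-style unit tests; the record layout and rules are invented for illustration, and pytest is assumed to be available.

```python
import pytest


def normalize_order(raw: dict) -> dict:
    """Validate and normalize one raw record; raise ValueError on bad input."""
    if not raw.get("order_id"):
        raise ValueError("order_id is required")
    amount = int(raw["amount_cents"])
    if amount < 0:
        raise ValueError("amount_cents must be non-negative")
    return {
        "order_id": str(raw["order_id"]),
        "amount_cents": amount,
        "currency": raw.get("currency", "USD"),  # hypothetical default
    }


def test_normalize_order_rejects_negative_amount():
    with pytest.raises(ValueError):
        normalize_order({"order_id": "A1", "amount_cents": -5})


def test_normalize_order_defaults_currency():
    assert normalize_order({"order_id": "A1", "amount_cents": 100})["currency"] == "USD"
```

Because the function takes plain dicts and raises on bad input, it can be reused unchanged inside an Airflow task or tested in isolation.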
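And a brief illustration of the SQL expectations, using the standard library's sqlite3 so it runs anywhere; the schema and data are made up. Selecting only the needed columns, parameterizing the filter, and indexing the filtered column are the habits the requirement points at.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    CREATE INDEX idx_orders_customer ON orders (customer);
    INSERT INTO orders (customer, total)
    VALUES ('acme', 10.0), ('acme', 25.5), ('globex', 7.25);
    """
)

# Parameterized query: fetches only the needed columns and lets the
# index on customer do the filtering.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer = ? ORDER BY id",
    ("acme",),
).fetchall()
print(rows)  # [(1, 10.0), (2, 25.5)]
```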
Compensation
Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new-hire salaries across all US locations. Within the range, individual pay is determined by work location and additional job-related factors, including knowledge, skills, experience, tenure, and relevant education or training. The pay scale is subject to change depending on business needs. Your recruiter can share the specific salary range for your preferred location during the hiring process. Additional compensation may include benefits, discretionary bonuses, and equity.